Multi-scale object detection algorithm based on improved YOLOv3
Liying ZHANG, Chunjiang PANG, Xinying WANG, Guoliang LI
Journal of Computer Applications    2022, 42 (8): 2423-2431.   DOI: 10.11772/j.issn.1001-9081.2021060984

To further improve the speed and precision of multi-scale object detection, and to address the missed detections, false detections and repeated detections caused by small objects, an object detection algorithm based on improved You Only Look Once v3 (YOLOv3) was proposed to realize automatic detection of multi-scale objects. Firstly, the structure of the feature extraction network was improved, and an attention mechanism was introduced into the spatial dimensions of the residual module to attend to small objects. Then, Dense Convolutional Network (DenseNet) was used to fully integrate the shallow information of the network, and depthwise separable convolution was used to replace the normal convolution of the backbone network, thereby reducing the number of model parameters and improving the detection speed. In the feature fusion network, bidirectional fusion of shallow and deep features was realized through a bidirectional feature pyramid structure, and the 3-scale prediction was extended to 4-scale prediction, which improved the learning ability for multi-scale features. In terms of the loss function, Generalized Intersection over Union (GIoU) loss was adopted, which increased the precision of identifying objects and reduced the object miss rate. Experimental results show that on Pascal VOC datasets, the mean Average Precision (mAP) of the improved YOLOv3 algorithm reaches 83.26%, which is 5.89 percentage points higher than that of the original YOLOv3 algorithm, and the detection speed of the improved algorithm reaches 22.0 frame/s. On the Common Objects in COntext (COCO) dataset, the improved algorithm raises mAP by 3.28 percentage points over the original YOLOv3. At the same time, the mAP on multi-scale object detection is also improved, which verifies the effectiveness of the object detection algorithm based on the improved YOLOv3.
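The GIoU loss mentioned in the abstract can be sketched as follows. This is a generic illustration of the GIoU formula (IoU minus the normalized empty area of the smallest enclosing box), not the authors' implementation; box format and values are assumptions.

```python
def giou(box_a, box_b):
    """Generalized IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest box C enclosing both boxes
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c

# GIoU loss = 1 - GIoU; unlike plain IoU loss, it stays informative
# (non-constant) even when the predicted and ground-truth boxes are disjoint.
loss = 1.0 - giou((0, 0, 2, 2), (1, 1, 3, 3))
```

This property is why GIoU reduces missed objects: disjoint boxes still receive a gradient that pulls them together.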

Depth image super-resolution based on shape-adaptive non-local regression and non-local gradient regularization
Yingying ZHANG, Chao REN, Ce ZHU
Journal of Computer Applications    2022, 42 (6): 1941-1949.   DOI: 10.11772/j.issn.1001-9081.2021040594

To deal with the low resolution of depth images and the blurring of depth discontinuities, a depth image super-resolution method based on shape-adaptive non-local regression and non-local gradient regularization was proposed. To exploit the correlation between non-local similar patches of a depth image, a shape-adaptive non-local regression method was proposed: a shape-adaptive self-similar patch was extracted for each pixel, and a group of similar pixels was constructed for the target pixel according to its shape-adaptive patch. Then, for each pixel in the similar pixel group, a non-local weight was computed with the assistance of the high-resolution color image of the same scene, thereby constructing the non-local regression prior. To preserve the edge information of the depth image, the non-locality of the depth image gradient was exploited. Unlike Total Variation (TV) regularization, which assumes that the gradients of all pixels obey a zero-mean Laplacian distribution, the gradient mean of each pixel was estimated from non-local patches by exploiting the non-local similarity of the depth image, and the gradient distribution of each pixel was fitted using the learned mean. Experimental results show that, compared with the Edge Inconsistency Evaluation Model (EIEM) on Middlebury datasets, the proposed method reduces the Mean Absolute Difference (MAD) by 41.1% and 40.8% respectively.
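The non-local weighting step can be sketched with the standard Gaussian-of-patch-distance formula used by non-local methods of this kind. The patch representation and decay parameter `h` below are assumptions for illustration, not the authors' exact color-guided formulation:

```python
import math

def nonlocal_weights(ref_patch, candidate_patches, h=10.0):
    """Normalized Gaussian weights from patch distances, as used to build
    a non-local regression prior. Patches are flat lists of intensities;
    h controls how fast the weight decays with dissimilarity (assumed value)."""
    weights = []
    for p in candidate_patches:
        # Mean squared distance between the reference patch and a candidate
        d2 = sum((a - b) ** 2 for a, b in zip(ref_patch, p)) / len(ref_patch)
        weights.append(math.exp(-d2 / (h * h)))
    s = sum(weights)
    return [w / s for w in weights]  # weights sum to 1

# The most similar candidate receives the largest weight
w = nonlocal_weights([1.0, 2.0, 3.0], [[1.0, 2.0, 3.0], [9.0, 9.0, 9.0]])
```

In the paper's setting, the patches would be taken from the registered high-resolution color image rather than the low-resolution depth map itself.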

Prediction method of liver transplantation complications based on transfer component analysis and support vector machine
Hongliang CAO, Ying ZHANG, Bin WU, Fanyu LI, Xubo NA
Journal of Computer Applications    2021, 41 (12): 3608-3613.   DOI: 10.11772/j.issn.1001-9081.2021060886

Many machine learning algorithms cope well with prediction and classification, but suffer from poor prediction accuracy and F1 score on medical datasets with small samples and large feature spaces. To improve the accuracy and F1 score of liver transplantation complication prediction, a prediction and classification method based on Transfer Component Analysis (TCA) and Support Vector Machine (SVM) was proposed. In this method, TCA was used to map the feature space and reduce its dimension, with the source domain and the target domain mapped into the same reproducing kernel Hilbert space, thereby achieving marginal distribution adaptation. The SVM was trained on the transferred source domain and then used to predict complications in the target domain. In liver transplantation complication prediction experiments for complication Ⅰ, complication Ⅱ, complication Ⅲa, complication Ⅲb and complication Ⅳ, the accuracy of the proposed method was improved by 7.8% to 42.8% compared with traditional machine learning and Heterogeneous Domain Adaptation (HDA), and the F1 score reached 85.0% to 99.0%, whereas traditional machine learning and HDA obtained high accuracy but low recall due to the imbalance of positive and negative samples. Experimental results show that TCA combined with SVM can effectively improve the accuracy and F1 score of liver transplantation complication prediction.
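The quantity TCA minimizes when aligning the two domains is the Maximum Mean Discrepancy (MMD) between source and target in the reproducing kernel Hilbert space. A minimal empirical-MMD sketch (with an RBF kernel and an assumed bandwidth, not the paper's exact kernel or solver):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel between two feature vectors (gamma is assumed)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(source, target, gamma=0.5):
    """Squared empirical Maximum Mean Discrepancy: the marginal-distribution
    gap between two samples, measured in the kernel's Hilbert space."""
    m, n = len(source), len(target)
    k_ss = sum(rbf(a, b, gamma) for a in source for b in source) / (m * m)
    k_tt = sum(rbf(a, b, gamma) for a in target for b in target) / (n * n)
    k_st = sum(rbf(a, b, gamma) for a in source for b in target) / (m * n)
    return k_ss + k_tt - 2.0 * k_st
```

TCA searches for a low-dimensional projection that makes this quantity small while preserving data variance; the SVM is then trained on the projected source samples.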

Fast haze removal algorithm for single image based on human visual characteristics
ZHANG Hongying ZHANG Sainan WU Yadong WU Bin
Journal of Computer Applications    2014, 34 (6): 1753-1757.   DOI: 10.11772/j.issn.1001-9081.2014.06.1753

To remove weather effects from degraded images, a fast single-image haze removal algorithm based on human visual characteristics was proposed. According to the luminance distribution of the hazy image and human visual characteristics, the proposed method first used the luminance component to estimate a coarse transmission map, then refined the transmission map with a linear spatial filter and obtained the dehazed image through the atmospheric scattering model. Finally, a new image-enhancement fitting function was applied to enhance the luminance component of the dehazed image to make it more natural and clear. Experimental results show that the proposed algorithm removes haze effectively and outperforms existing algorithms in terms of contrast, information entropy and computing time.
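The final recovery step via the atmospheric scattering model can be sketched per pixel. The model is the standard I = J·t + A·(1 − t); the lower bound `t0` on transmission is a common safeguard and an assumption here, not a detail from the abstract:

```python
def dehaze_pixel(i, a, t, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to
    recover the scene radiance J from observed intensity i, atmospheric
    light a, and estimated transmission t (clamped below by t0 so that
    near-zero transmission does not amplify noise)."""
    t = max(t, t0)
    return (i - a) / t + a
```

Applying this to each pixel of the luminance-derived transmission map yields the dehazed image that the enhancement fitting function then brightens.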

MLEM low-dose CT reconstruction algorithm based on variable exponent anisotropic diffusion and non-locality
ZHANG Fang CUI Xueying ZHANG Quan DONG Chanchan SUN Weiya BAI Yunjiao GUI Zhiguo
Journal of Computer Applications    2014, 34 (12): 3605-3608.  

Concerning the severe quality degradation of low-dose Computed Tomography (CT) reconstructed images, a low-dose CT reconstruction method combining Maximum Likelihood Expectation Maximization (MLEM) with non-locality and a variable exponent was presented. Since traditional anisotropic diffusion is insufficient for noise reduction, a variable exponent that effectively compromises between the heat-conduction and anisotropic-diffusion (P-M) models, and a similarity function that detects edges and details better than the gradient, were introduced into traditional anisotropic diffusion to achieve the desired effect. In each iteration, the basic MLEM algorithm was first used to reconstruct the low-dose projection data; then the diffusion function was improved by the non-local similarity measure, the variable exponent and fuzzy mathematics theory, and the improved anisotropic diffusion was used to denoise the reconstructed image; finally, a median filter was used to eliminate impulse-noise points in the image. Experimental results show that the proposed algorithm achieves lower Mean Absolute Error (MAE) and Normalized Mean Square Distance (NMSD) than OS-PLS (Ordered Subsets-Penalized Least Squares), OS-PML-OSL (Ordered Subsets-Penalized Maximum Likelihood-One Step Late) and the algorithm based on the traditional P-M model, and its Signal-to-Noise Ratio (SNR) reaches 10.52. The algorithm can effectively eliminate streak artifacts while better preserving image edges and details.
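The classic Perona-Malik (P-M) diffusion step that the paper builds on can be sketched as follows. This is the unmodified baseline with the standard conduction function g(s) = 1/(1 + (s/k)²); the paper's variable exponent and non-local similarity measure would replace that function, and the parameter values here are assumptions:

```python
def pm_step(img, k=0.1, lam=0.2):
    """One Perona-Malik anisotropic diffusion step on a 2D list-of-lists
    image. Each interior pixel moves toward its 4 neighbors, with flux
    attenuated by g(grad) = 1 / (1 + (grad/k)**2) so strong edges diffuse
    less than flat noisy regions."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                grad = img[y + dy][x + dx] - img[y][x]
                total += grad / (1.0 + (grad / k) ** 2)  # g(grad) * grad
            out[y][x] = img[y][x] + lam * total
    return out
```

In the proposed method this denoising step is interleaved with MLEM reconstruction iterations and followed by a median filter.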

Demodulation algorithm design of VHF data broadcast signal
ZHANG Kunfeng GUO Ying ZHANG Guoxiang ZHAO Yang
Journal of Computer Applications    2013, 33 (10): 2739-2741.  
To enhance synchronization and demodulation performance, a Very High Frequency (VHF) Data Broadcast (VDB) signal demodulation algorithm based on solving a differential equation was proposed. The algorithm eliminated the synchronization performance deterioration caused by frequency offset, and frame synchronization, bit synchronization, frequency offset estimation and correction could all be completed within a single set of synchronization symbols. Simulation results show that the method effectively enhances VDB signal demodulation performance.
Non-local means denoising approach based on dictionary learning
CUI Xueying ZHANG Quan GUI Zhiguo
Journal of Computer Applications    2013, 33 (05): 1420-1422.   DOI: 10.3724/SP.J.1087.2013.01420
Concerning the similarity measurement in non-local means, a method based on dictionary learning was presented. First, block-matching-based local pixel grouping was used to eliminate interference from dissimilar image blocks. Then, the corrupted similar blocks were denoised by dictionary learning: as a further development of the classical sparse representation model, the similar patches were jointly sparsely represented over an efficient and compact dictionary learned by principal component analysis, so that the correlation among similar patches was well preserved. The similarity between pixels was then measured by the Euclidean distance between the denoised image blocks, which reflects the similarity of the similar blocks well. Experimental results show that the modified algorithm achieves better denoising performance than the original one in terms of both Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality. For images with high structural similarity and rich detail, structures and details are well preserved, and the robustness of the presented method is superior to that of the original.
Fuzzy multi-objective group decision making based on interval-valued intuitionistic fuzzy set
WANG Huiying ZHANG Chaokun DONG Dong
Journal of Computer Applications    2013, 33 (04): 967-970.   DOI: 10.3724/SP.J.1087.2013.00967
To improve the accuracy of data-driven decision making, an optimization approach for multi-objective group decision making was proposed. Based on interval-valued intuitionistic fuzzy set theory, the optimization was carried out by gradual iterative computation for the common situation in which the weight information is only partially known. Simulation results indicate that the algorithm has low time complexity and is easy to implement on a computer, and they also demonstrate its effectiveness and accuracy.
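The ranking primitive in such methods is a score function over interval-valued intuitionistic fuzzy numbers. The sketch below uses the widely cited score function s = (μ⁻ + μ⁺ − ν⁻ − ν⁺)/2; the abstract does not state which score function the authors adopt, so this and the sample values are assumptions:

```python
def ivif_score(mu, nu):
    """Score of an interval-valued intuitionistic fuzzy number, where
    mu = (mu_lo, mu_hi) is the membership interval and nu = (nu_lo, nu_hi)
    the non-membership interval. Larger score means a better alternative."""
    return (mu[0] + mu[1] - nu[0] - nu[1]) / 2.0

# Rank two hypothetical alternatives by score
a = ivif_score((0.4, 0.6), (0.2, 0.3))  # stronger membership, lower non-membership
b = ivif_score((0.3, 0.5), (0.3, 0.4))
best = "a" if a > b else "b"
```

Group decision methods aggregate such numbers across experts and objectives before scoring; the iterative step in the paper refines the unknown weights between aggregations.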
Hierarchical model management framework based on universal relation model representation
XING Ying ZHANG Hongjun ZHANG Rui HE Jian
Journal of Computer Applications    2013, 33 (03): 849-853.   DOI: 10.3724/SP.J.1087.2013.00849
The existing model representations cannot meet the requirements of multistage modeling, so model sharing, reuse and management can hardly be achieved in the multistage modeling process. Therefore, a hierarchical model management framework based on universal relation model representation was presented. Firstly, the requirements of model representation in model management and the limitations of existing model representations were analyzed; then a model representation based on the universal relation was investigated to establish the mapping between layers of the conceptual model and the mathematical model, and an integrative model representation and a hierarchical model management framework covering the conceptual model, mathematical model and physical model were set up. Finally, the logic of the modelbase in the management framework was designed and physical model generation based on the universal relation was investigated. With the integrative model representation, models from different modeling stages can be managed uniformly.
Automatic annotation of auxiliary words usage in rule-based Chinese language
HAN Ying-jie ZAN Hong-ying ZHANG Kun-li CAI Yu-mei
Journal of Computer Applications    2011, 31 (12): 3271-3274.  
Existing research results on auxiliary words are difficult to apply directly to automatic annotation in natural language processing. Based on an auxiliary-word knowledge base, a rule-based method was used to automatically annotate the usages of auxiliary words. Comparison with the test results shows that refining, extending and adjusting the matching order of the rules can effectively improve precision and recall. This also helps improve the quality of Chinese corpora, deepen the processing depth, and reduce manual work.
Bottleneck bandwidth measurement and localization based on meltable packet train
Min-zheng JIA Yuan-zhong ZHU Zhen-wei YU Ying ZHANG
Journal of Computer Applications    2011, 31 (07): 1934-1938.  
Current bottleneck bandwidth measurement technologies and localization methods were analyzed. Based on the analysis and comparison of their advantages and disadvantages, a new method of bottleneck bandwidth measurement and localization based on the meltable packet train was put forward. The method can measure the bottleneck bandwidth and localize it at the same time, which effectively reduces the number and load of measurements. In addition, the validity of the method was proved by theoretical analysis and simulation experiments, and its advantages over other methods were pointed out.
Application on call signal system and call control of HMP
Xue-Hua LIAO Jing-wei MIAO Zhou-sen ZHU Ying ZHANG Xiao-ning LI
Journal of Computer Applications   
The communication signal system and call control of Host Media Processing (HMP) were studied. A new communication model using HMP, which combined SIP and MS RCT as communication protocols, was proposed. A procedure for improving the identification of Dual Tone Multi Frequency (DTMF) signals under this model was provided, and methods of reducing signal transmission time were described.
CAVLC coding algorithm and FPGA realization of a high-speed entropy encoder
Xiao-Ming LIU Xu-Ying ZHANG Fang LI
Journal of Computer Applications   
The Context-Based Adaptive Variable-Length Coding (CAVLC) algorithm is adopted as the entropy coding method in the baseline and extended profiles of the H.264/AVC standard, but the standard does not explicitly stipulate the encoder details. A thorough analysis of the CAVLC coding algorithm of the H.264 standard was performed based on the principle of the CAVLC decoding method. According to this analysis, a high-speed, low-power CAVLC coder for the H.264/AVC standard was presented, in which multi-clock-domain processing and parallel processing techniques were adopted to improve system performance, and arithmetic operations were used to replace some static code tables to reduce memory consumption. The detailed design and FPGA realization of each sub-block are also covered. Finally, FPGA verification and realization indicate that the maximum coding system clock can reach 107.97 MHz and the coding delay is less than 36 clock cycles, which adequately meets the needs of high-definition, real-time applications.
Medical Image Registration Method Based on Mixed Mutual Information
Hong-Ying ZHANG
Journal of Computer Applications   
Traditionally, the similarity metric is based on Shannon's entropy. Analysis of Renyi's entropy shows that it can remove some unwanted local optima and accordingly smooth out a difficult optimization terrain, while Shannon's entropy deepens the basin of attraction, making the registration function easier to optimize. Therefore, a new similarity measure based on mixed mutual information was proposed: measures based on different entropies were used in different search phases, with a global optimization algorithm and a local one used respectively. First, the global optimization algorithm was used to find the local extrema of the generalized mutual information measure based on Renyi's entropy. Then, the local algorithm located the global optimal solution by searching among the current local optima, taking the generalized mutual information measure based on Shannon's entropy as the objective function.
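The two entropies the mixed measure alternates between can be sketched directly from their definitions (natural-log base and the order-2 Renyi entropy are assumptions; mutual information is then built from such entropies of the joint and marginal intensity histograms):

```python
import math

def shannon(p):
    """Shannon entropy H = -sum p*log(p) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, alpha=2.0):
    """Renyi entropy of order alpha != 1; it tends to Shannon's entropy
    as alpha approaches 1, and for alpha > 1 it is never larger."""
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)
```

The smoother Renyi-based measure guides the coarse global search, after which the Shannon-based measure, with its deeper basin of attraction, pins down the final optimum.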